Generalization Error of Randomized Linear Zero Empirical Error Classifier: Non-Centered Data Case
Abstract
One of the main problems in pattern classification and neural network training theory is the generalization performance of learning. This paper extends the results on the randomized linear zero empirical error (RLZEE) classifier obtained by Raudys, Dičiūnas and Basalykas for the case of centered multivariate spherical normal classes. We derive an exact formula for the expected probability of misclassification (PMC) of the RLZEE classifier in the case of arbitrary (centered or non-centered) spherical normal classes. The formula depends on two parameters characterizing the “degree of non-centering” of the data. We discuss theoretically, and illustrate graphically and numerically, the influence of these parameters on the PMC of the RLZEE classifier. In particular, we show that in some cases non-centered data have a smaller expected PMC than centered data.
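The abstract's setting can be illustrated with a small Monte Carlo sketch. The code below is not the paper's exact formula: it simply draws random linear rules that achieve zero empirical error on a training sample from two non-centered spherical normal classes and averages their exact (analytic) misclassification probabilities. The parameters `shift` and `delta` are illustrative stand-ins, where `shift` plays the role of a non-centering parameter and `delta` is the between-class distance.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def pmc(w, b, mu1, mu2):
    """Exact misclassification probability of the rule sign(w.x + b)
    for two equiprobable spherical normal classes N(mu1, I) (positive)
    and N(mu2, I) (negative)."""
    nw = np.linalg.norm(w)
    return 0.5 * norm_cdf(-(w @ mu1 + b) / nw) + 0.5 * norm_cdf((w @ mu2 + b) / nw)

# Illustrative non-centered setup: both class means are shifted away from
# the origin by `shift`; the classes are `delta` apart along the first axis.
p, n, delta, shift = 5, 5, 3.0, 1.0
mu1 = np.full(p, shift); mu1[0] += delta / 2
mu2 = np.full(p, shift); mu2[0] -= delta / 2
X1 = rng.normal(size=(n, p)) + mu1
X2 = rng.normal(size=(n, p)) + mu2

# Randomized zero-empirical-error rule: draw a random direction and, when
# it separates the projected training sets, place the threshold at a
# random point inside the empirical margin.
errs = []
for _ in range(20_000):
    w = rng.normal(size=p)
    lo, hi = (X2 @ w).max(), (X1 @ w).min()
    if lo < hi:                    # zero empirical error is achievable
        b = -rng.uniform(lo, hi)   # random threshold inside the margin
        errs.append(pmc(w, b, mu1, mu2))

print(f"accepted classifiers: {len(errs)}, expected PMC ~ {np.mean(errs):.3f}")
```

Re-running with `shift = 0.0` gives the centered case studied in the earlier paper, so the two settings can be compared directly.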
Similar resources
Generalization Error of Randomized Linear Zero Empirical Error Classifier: Simple Asymptotics for Centered Data Case
Estimating the generalization performance of a classifier is one of the most important problems in pattern classification and neural network training theory. In this paper we estimate the generalization error (mean expected probability of misclassification) for the randomized linear zero empirical error (RLZEE) classifier considered by Raudys, Dičiūnas and Basalykas. Instead of “non-explicit”...
On Dimensionality, Sample Size, and Classification Error of Nonparametric Linear Classification Algorithms
This paper compares two nonparametric linear classification algorithms—the zero empirical error classifier and the maximum margin classifier—with parametric linear classifiers designed to classify multivariate Gaussian populations [7]. Formulae and a table for the mean expected probability of misclassification MEPN are presented. They show that the classification error is mainly determined by N...
Expected classification error of the Fisher linear classifier with pseudo-inverse covariance matrix
The pseudo-Fisher linear classifier is considered as the “diagonal” Fisher linear classifier applied to the principal components corresponding to non-zero eigenvalues of the sample covariance matrix. An asymptotic formula for the expected generalization error of the Fisher classifier with the pseudo-inversion is derived which explains the peaking behaviour: with an increasing number of le...
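The construction described in this snippet can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's derivation: when the sample size is smaller than the dimension, the pooled sample covariance is singular, and the Moore-Penrose pseudo-inverse inverts it only on the span of the principal components with non-zero eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(1)

p, n = 20, 8          # n samples per class, n << p: sample covariance is singular
X1 = rng.normal(size=(n, p))            # class 1 ~ N(0, I)
X2 = rng.normal(size=(n, p)) + 1.0      # class 2 ~ N(1, I)

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
Xc = np.vstack([X1 - m1, X2 - m2])
S = Xc.T @ Xc / (2 * n - 2)             # pooled sample covariance, rank <= 2n - 2

# Fisher weights with the pseudo-inverse in place of S^{-1}: equivalent to
# inverting S only on the subspace spanned by its non-zero eigenvalues.
w = np.linalg.pinv(S) @ (m1 - m2)
b = -w @ (m1 + m2) / 2                  # threshold midway between the class means

print(np.linalg.matrix_rank(S), w.shape)
```

Here `matrix_rank(S)` is 2n - 2 = 14 < p = 20, which is exactly the situation in which the pseudo-inversion, and the peaking behaviour it produces, comes into play.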
Structural Risk Minimization for Character Recognition
The method of Structural Risk Minimization refers to tuning the capacity of the classifier to the available amount of training data. This capacity is influenced by several factors, including: (1) properties of the input space, (2) nature and structure of the classifier, and (3) learning algorithm. Actions based on these three factors are combined here to control the capacity of linear classifie...
Sharp Generalization Error Bounds for Randomly-projected Classifiers
We derive sharp bounds on the generalization error of a generic linear classifier trained by empirical risk minimization on randomly-projected data. We make no restrictive assumptions (such as sparsity or separability) on the data: Instead we use the fact that, in a classification setting, the question of interest is really ‘what is the effect of random projection on the predicted class labels?’...
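The question in that snippet can be probed empirically with a toy sketch. The code below is an illustrative assumption on my part rather than the paper's setup: it uses a simple nearest-class-mean linear classifier (not empirical risk minimization) and checks how often its predicted labels agree before and after a Gaussian random projection.

```python
import numpy as np

rng = np.random.default_rng(2)

d, k, n = 100, 20, 200          # ambient dim, projected dim, points per class
mu = np.zeros(d); mu[:5] = 2.0  # well-separated class means
X = np.vstack([rng.normal(size=(n, d)) - mu,
               rng.normal(size=(n, d)) + mu])
y = np.repeat([-1, 1], n)

def nearest_mean_labels(Z, y):
    """Predicted labels of the nearest-class-mean linear classifier."""
    m_neg, m_pos = Z[y == -1].mean(0), Z[y == 1].mean(0)
    w = m_pos - m_neg
    b = -w @ (m_pos + m_neg) / 2
    return np.sign(Z @ w + b)

# Gaussian random projection R: d -> k, scaled to preserve norms on average.
R = rng.normal(size=(d, k)) / np.sqrt(k)

agree = np.mean(nearest_mean_labels(X, y) == nearest_mean_labels(X @ R, y))
print(f"label agreement after random projection: {agree:.3f}")
```

On well-separated data like this, almost all predicted labels survive the projection, which is the phenomenon the quoted question is about.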
Journal: Informatica, Lith. Acad. Sci.
Volume: 12, Issue: -
Pages: -
Published: 2001